Kaon to two-pion decay and pion-pion scattering from lattice QCD
In this work, we present a lattice QCD calculation of two closely related quantities: (1) the ππ scattering phase shift in both the I = 0 and I = 2 channels at seven energies in total, and (2) the ΔI = 1/2, K → ππ decay amplitude A₀ and ε′, the measure of direct CP violation. These two results improve on our earlier calculation presented in 2015 [1]. The calculation is performed on a 32³ × 64 lattice ensemble with a⁻¹ = 1.3784(68) GeV. This is a physical calculation, in which chiral symmetry breaking is controlled by 2+1 flavors of Möbius domain wall fermions and both the kaon and the pion take their physical masses. G-parity boundary conditions are used and carefully tuned so that the ground-state energy of the I = 0 ππ state matches the kaon mass. Three sets of interpolating operators are used, including a scalar bilinear "σ" operator and paired single-pion bilinear operators with the constituent pions carrying various relative momenta. Several techniques, including correlated fits and a bootstrap determination of the p-value, have been used, and a detailed analysis of all major systematic errors is performed. The scattering phase shift results are presented in Fig. 5.10 and Tab. 5.12. For the kaon decay amplitude, we obtain Re(A₀) = 2.99(0.32)(0.59) × 10⁻⁷ GeV, which is consistent with the experimental value of Re(A₀) = 3.3201(18) × 10⁻⁷ GeV, and Im(A₀) = -6.98(0.62)(1.44) × 10⁻¹¹ GeV. Combined with our earlier lattice calculation of A₂ [2], we obtain Re(ε′/ε) = 21.7(2.6)(6.2)(5.0) × 10⁻⁴, which agrees well with the experimental value of Re(ε′/ε) = 16.6(2.3) × 10⁻⁴, and Re(A₀)/Re(A₂) = 19.9(2.3)(4.4), consistent with the experimental value of Re(A₀)/Re(A₂) = 22.45(6), known as the ΔI = 1/2 rule.
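The abstract mentions correlated fits with a bootstrap determination of the p-value. Below is a minimal Python sketch of that idea under stated assumptions: a toy single-exponential model, Gaussian pseudo-data, and a parametric-bootstrap variant. None of this reproduces the thesis analysis code.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def model(params, t):
    A, E = params
    return A * np.exp(-E * t)

def fit(y_mean, cov_inv, t):
    """Minimize the correlated chi^2 over (A, E); returns the optimizer result."""
    fun = lambda p: (y_mean - model(p, t)) @ cov_inv @ (y_mean - model(p, t))
    return minimize(fun, x0=np.array([1.0, 0.5]), method="Nelder-Mead")

# Toy correlated data: 200 measurements of a decaying signal on time slices t.
t = np.arange(1.0, 9.0)
corr = 0.5 * np.eye(len(t)) + 0.5                    # strongly correlated noise
data = model((1.0, 0.4), t) + 0.05 * rng.multivariate_normal(
    np.zeros(len(t)), corr, size=200)

y_mean = data.mean(axis=0)
cov_mean = np.cov(data, rowvar=False) / len(data)    # covariance of the mean
cov_inv = np.linalg.inv(cov_mean)
res = fit(y_mean, cov_inv, t)

# Parametric bootstrap: draw pseudo-data from the best-fit model with the measured
# covariance, refit each draw, and take the p-value as the fraction of bootstrap
# chi^2 minima at least as large as the one observed on the data.
q_boot = [fit(rng.multivariate_normal(model(res.x, t), cov_mean), cov_inv, t).fun
          for _ in range(500)]
p_value = np.mean(np.array(q_boot) >= res.fun)
print(f"chi^2_min = {res.fun:.2f}, bootstrap p-value ~ {p_value:.2f}")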
Support Vector Hazards Machine: A Counting Process Framework for Learning Risk Scores for Censored Outcomes
Learning risk scores to predict dichotomous or continuous outcomes using machine learning approaches has been studied extensively. However, how to learn risk scores for time-to-event outcomes subject to right censoring has received little attention until recently. Existing approaches rely on inverse probability weighting or rank-based regression, which may be inefficient. In this paper, we develop a new support vector hazards machine (SVHM) approach to predict censored outcomes. Our method is based on predicting the counting process associated with the time-to-event outcomes among subjects at risk via a series of support vector machines. Introducing counting processes to represent time-to-event data leads to a connection between support vector machines in supervised learning and hazards regression in standard survival analysis. To account for the different at-risk populations at the observed event times, a time-varying offset is used in estimating the risk scores. The resulting optimization is a convex quadratic programming problem that can easily incorporate non-linearity via the kernel trick. We demonstrate an interesting link from the profiled empirical risk function of SVHM to the Cox partial likelihood. We then formally show that SVHM is optimal in discriminating the covariate-specific hazard function from the population-average hazard function, and establish the consistency and learning rate of the risk predictions based on the estimated risk scores. Simulation studies show improved prediction accuracy of the event times using SVHM compared with existing machine learning methods and conventional statistical approaches. Finally, we analyze data from two real-world biomedical studies in which we use clinical markers and neuroimaging biomarkers to predict the age at onset of a disease, and demonstrate the superiority of SVHM in distinguishing high-risk from low-risk subjects.
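To make the counting-process representation described above concrete, here is a minimal Python sketch of the data construction only: at each observed event time, the subjects still at risk form a classification instance (event at that time versus not). The toy data are illustrative assumptions; the actual SVHM quadratic program with its time-varying offset and kernel is described in the paper and not reproduced here.

import numpy as np

# Toy right-censored data: follow-up time, event indicator (1 = event, 0 = censored),
# and one covariate per subject. All values are illustrative.
time   = np.array([2.0, 3.5, 3.5, 5.0, 6.1, 7.2])
status = np.array([1,   1,   0,   1,   0,   1  ])
x      = np.array([[0.3], [1.2], [0.8], [2.1], [0.5], [1.9]])

instances = []                                     # (event time, covariate, label, risk-set size)
for t in np.unique(time[status == 1]):             # distinct observed event times
    at_risk = time >= t                            # subjects still under observation at t
    for i in np.where(at_risk)[0]:
        dN = int(status[i] == 1 and time[i] == t)  # counting-process increment dN_i(t)
        instances.append((t, x[i, 0], +1 if dN else -1, int(at_risk.sum())))

for t, xi, y, n_risk in instances:
    print(f"t={t:.1f}  x={xi:.1f}  label={y:+d}  at-risk={n_risk}")

The shrinking risk-set sizes across event times are what the time-varying offset in SVHM accounts for; a single score function is then learned jointly over all such instances.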
Multiple kernel learning with random effects for predicting longitudinal outcomes and data integration
Predicting disease risk and progression is one of the main goals in many clinical research studies. Cohort studies on the natural history and etiology of chronic diseases span years, and data are collected at multiple visits. Although kernel-based statistical learning methods have proven powerful for a wide range of disease prediction problems, these methods have been well studied only for independent data, not for longitudinal data. It is thus important to develop time-sensitive prediction rules that make use of the longitudinal nature of the data. In this paper, we develop a novel statistical learning method for longitudinal data by introducing subject-specific short-term and long-term latent effects through a designed kernel that accounts for within-subject correlation of longitudinal measurements. Since the presence of multiple sources of data is increasingly common, we embed our method in a multiple kernel learning framework and propose a regularized multiple kernel statistical learning method with random effects to construct effective nonparametric prediction rules. Our method allows easy integration of heterogeneous data sources and takes advantage of the correlation among longitudinal measures to increase prediction power. We use a different kernel for each data source, taking advantage of the distinctive features of each data modality, and then optimally combine the kernels across modalities. We apply the developed methods to two large epidemiological studies, one on Huntington's disease and the other on Alzheimer's disease (the Alzheimer's Disease Neuroimaging Initiative, ADNI), where we explore a unique opportunity to combine imaging and genetic data to study the prediction of mild cognitive impairment, and we show a substantial gain in performance while accounting for the longitudinal aspect of the data.
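As a rough Python sketch of combining kernels across heterogeneous data sources, in the spirit of the multiple kernel learning framework described above: the Gaussian kernels, fixed weights, and kernel ridge regression stand-in are assumptions for illustration, and the paper's subject-specific random effects and longitudinal kernel design are not reproduced here.

import numpy as np

def gaussian_kernel(A, B, gamma):
    """K[i, j] = exp(-gamma * ||A_i - B_j||^2)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(1)
n = 60
imaging  = rng.normal(size=(n, 5))     # toy stand-in for imaging features
genetics = rng.normal(size=(n, 20))    # toy stand-in for genetic features
y = imaging[:, 0] + 0.5 * genetics[:, :3].sum(axis=1) + 0.1 * rng.normal(size=n)

# One kernel per modality, then a convex combination whose weights would normally
# be chosen by the multiple kernel learning optimization (fixed here for illustration).
K_img = gaussian_kernel(imaging, imaging, gamma=0.1)
K_gen = gaussian_kernel(genetics, genetics, gamma=0.05)
weights = np.array([0.6, 0.4])
K = weights[0] * K_img + weights[1] * K_gen

# Kernel ridge regression on the combined kernel as a simple stand-in learner.
lam = 1.0
alpha = np.linalg.solve(K + lam * np.eye(n), y)
y_hat = K @ alpha
print("in-sample correlation:", np.corrcoef(y, y_hat)[0, 1].round(3))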